Embedding Symbolic Knowledge into Deep Networks

Xie, Yaqi, Xu, Ziwei, Kankanhalli, Mohan S., Meel, Kuldeep S., Soh, Harold

Neural Information Processing Systems

In this work, we aim to leverage prior symbolic knowledge to improve the performance of deep models. We propose a graph embedding network that projects propositional formulae (and assignments) onto a manifold via an augmented Graph Convolutional Network (GCN). To generate semantically faithful embeddings, we develop techniques that recognize node heterogeneity, together with a semantic regularization that incorporates structural constraints into the embedding. Experiments show that our approach improves the performance of models trained to perform entailment checking and visual relation prediction. Interestingly, we observe a connection between the tractability of the propositional theory representation and the ease of embedding.
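As a rough sketch of the kind of graph construction such an embedding network consumes (the tuple encoding and node labels here are illustrative assumptions, not the authors' pipeline), a propositional formula can be parsed into a directed graph whose nodes carry heterogeneous types, distinguishing operator nodes from variable leaves:

```python
# Minimal sketch: turn a propositional formula into a typed graph
# suitable for a GCN. The nested-tuple encoding and node-type labels
# are illustrative assumptions, not the paper's actual pipeline.

def formula_to_graph(formula):
    """Return (node_types, edges) for a nested-tuple formula.

    Operators are tuples like ('and', child, ...); variables are strings.
    Edges point from each operator node to its children.
    """
    node_types, edges = [], []

    def add(node):
        idx = len(node_types)
        if isinstance(node, tuple):        # operator node
            op, *children = node
            node_types.append(op)
            for child in children:
                child_idx = add(child)
                edges.append((idx, child_idx))
        else:                              # variable leaf
            node_types.append('var:' + node)
        return idx

    add(formula)
    return node_types, edges

# (x1 OR x2) AND (NOT x1)
types, edges = formula_to_graph(('and', ('or', 'x1', 'x2'), ('not', 'x1')))
print(types)   # ['and', 'or', 'var:x1', 'var:x2', 'not', 'var:x1']
print(edges)   # [(1, 2), (1, 3), (0, 1), (4, 5), (0, 4)]
```

Tagging each node with its type is what lets the network treat heterogeneous nodes (conjunctions, disjunctions, negations, variables) differently during message passing.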


Reviews: Embedding Symbolic Knowledge into Deep Networks

Neural Information Processing Systems

This paper introduces a method for incorporating prior knowledge, encoded as logical rules, to improve the performance of deep learning models. In particular, it takes logical rules in decomposable and deterministic negation normal form (d-DNNF) and proposes using an augmented graph convolutional network to embed them into a vector space. This embedding is then regularised according to the logical constraints, allowing the addition of a "logic loss" term that trains models to obey these rules. Incorporating (symbolic) background knowledge to improve the performance of deep learning methods is an interesting and valuable direction, and the experiments suggest that using a d-DNNF rather than a CNF is beneficial. However, for me the notion of using a d-DNNF as the source of background knowledge raises a few issues which I feel are not addressed in the paper.
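The "logic loss" the review mentions can be illustrated with a toy stand-in: penalize the distance between the embedding of the rule formula and the embedding of the model's predicted assignment, pulling predictions toward the region of satisfying assignments. The distance choice and function names below are assumptions for illustration, not the paper's actual loss:

```python
# Toy stand-in for a semantic-regularization ("logic") loss term.
# The real method embeds formulae and assignments with a GCN; here we
# just take two fixed vectors and use squared Euclidean distance,
# which is an assumed, illustrative choice.

def logic_loss(formula_emb, assignment_emb):
    """Squared Euclidean distance between a formula embedding and an
    assignment embedding; minimizing it pulls the model's predicted
    assignment toward the formula it should satisfy."""
    return sum((f - a) ** 2 for f, a in zip(formula_emb, assignment_emb))

# A prediction far from the rule embedding incurs a larger penalty.
print(logic_loss([1.0, 0.0], [0.0, 0.0]))  # 1.0
print(logic_loss([1.0, 0.0], [1.0, 0.0]))  # 0.0
```

In training, such a term would be added to the task loss with a weighting coefficient, so the model trades off task accuracy against consistency with the background rules.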


Reviews: Embedding Symbolic Knowledge into Deep Networks

Neural Information Processing Systems

The paper describes a novel way of regularizing a deep neural network to be semantically similar to a logical formula. The main contribution is the use of d-DNNF, a particular format for formulae which is well suited to embedding in a graph convolutional network. This format is used in certain reasoning tasks, but is not widely known in the NeurIPS community, and (to this metareviewer at least) seeing it in this context was surprising and insightful. It also seems to be a trick that could be useful across a range of tasks. It is shown that the model improves performance on synthetic data and on a non-trivial realistic task, visual relation prediction.
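The appeal of d-DNNF in reasoning tasks comes from its tractability: queries such as model counting, which are #P-hard for general CNF, reduce to a single bottom-up pass over the circuit. A minimal sketch, assuming a smooth d-DNNF encoded as nested tuples (both the encoding and the smoothness assumption are mine, not from the paper):

```python
# Why d-DNNF is attractive: model counting is one bottom-up pass.
# Assumes a *smooth* d-DNNF given as nested tuples, where a leaf
# ('lit', name) has exactly one model over its own variable.
# Decomposability lets AND multiply; determinism lets OR add.

def count_models(node):
    kind = node[0]
    if kind == 'lit':                  # literal: exactly one model
        return 1
    if kind == 'and':                  # disjoint variable sets -> multiply
        result = 1
        for child in node[1:]:
            result *= count_models(child)
        return result
    if kind == 'or':                   # disjoint model sets -> add
        return sum(count_models(child) for child in node[1:])
    raise ValueError('unknown node kind: %r' % kind)

# (x AND y) OR (NOT x AND y): models are {x=1,y=1} and {x=0,y=1}.
circuit = ('or',
           ('and', ('lit', 'x'),  ('lit', 'y')),
           ('and', ('lit', '-x'), ('lit', 'y')))
print(count_models(circuit))  # 2
```

The same bottom-up structure is plausibly what makes these circuits pleasant inputs for a graph network, which echoes the paper's observed link between tractability of the representation and ease of embedding.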

